Fisheye Consistency: Keeping Data in Synch in a Georeplicated World
Over the last thirty years, numerous consistency conditions for replicated
data have been proposed and implemented. Popular examples of such conditions
include linearizability (or atomicity), sequential consistency, causal
consistency, and eventual consistency. These consistency conditions are usually
defined independently from the computing entities (nodes) that manipulate the
replicated data; i.e., they do not take into account how computing entities
might be linked to one another, or geographically distributed. To address this
gap, as a first contribution, this paper introduces the notion of a proximity
graph between computing nodes. If two nodes are connected in this graph, their
operations must satisfy a strong consistency condition, while the operations
invoked by other nodes are allowed to satisfy a weaker condition. The second
contribution is the use of such a graph to provide a generic approach to the
hybridization of data consistency conditions into the same system. We
illustrate this approach on sequential consistency and causal consistency, and
present a model in which all data operations are causally consistent, while
operations by neighboring processes in the proximity graph are sequentially
consistent. The third contribution of the paper is the design and the proof of
a distributed algorithm based on this proximity graph, which combines
sequential consistency and causal consistency (the resulting condition is
called fisheye consistency). In doing so, the paper not only extends the domain
of consistency conditions, but also provides a generic, provably correct
solution of direct relevance to modern georeplicated systems.
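A minimal sketch of the proximity-graph idea, assuming an illustrative graph structure and method names that are not the paper's actual algorithm: operations of processes that are neighbors in the graph must satisfy the strong condition, while all other pairs only need the weak one.

```python
# Illustrative sketch of a proximity graph (names are assumptions, not the
# paper's API): neighboring nodes must keep their operations sequentially
# consistent, while all other pairs only need causal consistency.
from itertools import combinations

class ProximityGraph:
    def __init__(self, nodes, edges):
        self.neighbors = {n: set() for n in nodes}
        for u, v in edges:
            self.neighbors[u].add(v)
            self.neighbors[v].add(u)

    def required_consistency(self, p, q):
        """Condition the operations of the pair (p, q) must satisfy."""
        if q in self.neighbors[p]:
            return "sequential"   # strong condition between neighbors
        return "causal"           # weaker condition otherwise

# Example: three data centers; only A and B are close to each other.
g = ProximityGraph(["A", "B", "C"], [("A", "B")])
for p, q in combinations(sorted(g.neighbors), 2):
    print(p, q, "->", g.required_consistency(p, q))
```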
Progressive Transactional Memory in Time and Space
Transactional memory (TM) allows concurrent processes to organize sequences
of operations on shared \emph{data items} into atomic transactions. A
transaction may \emph{commit}, in which case it appears to have executed
sequentially, or it may \emph{abort}, in which case no data item is updated.
The TM programming paradigm emerged as an alternative to conventional
fine-grained locking techniques, offering ease of programming and
compositionality. Though typically themselves implemented using locks, TMs hide
the inherent issues of lock-based synchronization behind a nice transactional
programming interface.
In this paper, we explore the inherent time and space complexity of lock-based
TMs, with a focus on the most popular class of \emph{progressive} lock-based
TMs. We derive that a progressive TM might force a read-only transaction to
perform a number of steps quadratic in the number of data items it reads, and
to access a linear number of distinct memory locations, closing the question
of the inherent cost of \emph{read validation} in TMs. We then show that the total
number of \emph{remote memory references} (RMRs) that take place in an
execution of a progressive TM in which $n$ concurrent processes perform
transactions on a single data item might reach $\Omega(n \log n)$, which
appears to be the first RMR complexity lower bound for transactional memory.
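The quadratic step complexity stems from the incremental read-validation pattern common to progressive TMs, which the following sketch illustrates (class and field names are assumptions for the example, not the paper's construction): each new read re-validates every previously read item, so reading $k$ items costs $O(k^2)$ validation steps.

```python
# Sketch of incremental read validation in a progressive-style TM
# (illustrative; not the paper's model or construction).

class Abort(Exception):
    pass

class ReadOnlyTx:
    def __init__(self, memory):
        self.memory = memory      # shared store: item -> (value, version)
        self.read_set = {}        # item -> version observed by this tx

    def read(self, item):
        # Re-validate the whole read set: the k-th read costs k-1 checks,
        # hence O(k^2) steps for a transaction reading k items.
        for seen, version in self.read_set.items():
            if self.memory[seen][1] != version:
                raise Abort("read set no longer consistent")
        value, version = self.memory[item]
        self.read_set[item] = version
        return value

mem = {"x": (1, 0), "y": (2, 0)}
tx = ReadOnlyTx(mem)
print(tx.read("x"), tx.read("y"))   # consistent snapshot: 1 2
```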
Time-Efficient Read/Write Register in Crash-prone Asynchronous Message-Passing Systems
The atomic register is certainly the most basic object of computing science.
Its implementation on top of an n-process asynchronous message-passing system
has received a lot of attention. It has been shown that t < n/2
(where t is the maximal number of processes that may crash) is a necessary and
sufficient requirement to build an atomic register on top of a crash-prone
asynchronous message-passing system. Considering such a context, this paper
visits the notion of a fast implementation of an atomic register, and presents
a new time-efficient asynchronous algorithm. Its time-efficiency is measured
according to two different underlying synchrony assumptions. Whatever the
assumption, a write operation always costs a round-trip delay, while a read
operation costs a round-trip delay in favorable circumstances
(intuitively, when it is not concurrent with a write). When designing this
algorithm, the design spirit was to be as close as possible to the one of the
famous ABD algorithm (proposed by Attiya, Bar-Noy, and Dolev).
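For intuition, here is a minimal sketch of the quorum round-trip pattern of an ABD-style register, with replicas simulated as local objects; this illustrates the general pattern only, not the paper's time-efficient algorithm: a write installs a timestamped value at a majority, and a read returns the highest-timestamped value found at a majority.

```python
# Quorum round-trip pattern of an ABD-style register (illustrative sketch;
# not the paper's algorithm). Replicas are simulated as local objects.

class Replica:
    def __init__(self):
        self.ts, self.value = 0, None

    def write(self, ts, value):            # one message out, one ack back
        if ts > self.ts:
            self.ts, self.value = ts, value
        return "ACK"

    def read(self):
        return self.ts, self.value

def abd_write(replicas, ts, value):
    # In a real system these calls run in parallel and the write returns
    # after a majority of acks: a single round-trip delay.
    acks = sum(1 for r in replicas if r.write(ts, value) == "ACK")
    assert acks > len(replicas) // 2

def abd_read(replicas):
    majority = replicas[: len(replicas) // 2 + 1]
    answers = [r.read() for r in majority]
    return max(answers, key=lambda a: a[0])[1]   # highest timestamp wins

rs = [Replica() for _ in range(3)]
abd_write(rs, ts=1, value="x")
print(abd_read(rs))                              # -> x
```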
On the Optimal Space Complexity of Consensus for Anonymous Processes
The optimal space complexity of consensus in shared memory is a decades-old
open problem. For a system of $n$ processes, no algorithm is known that uses a
sublinear number of registers. However, the best known lower bound, due to Fich,
Herlihy, and Shavit, requires $\Omega(\sqrt{n})$ registers.
The special symmetric case of the problem where processes are anonymous (run
the same algorithm) has also attracted attention. Even in this case, the best
lower and upper bounds are still $\Omega(\sqrt{n})$ and $O(n)$. Moreover, Fich,
Herlihy, and Shavit first proved their lower bound for anonymous processes, and
then extended it to the general case. As such, resolving the anonymous case
might be a significant step towards understanding and solving the general
problem.
In this work, we show that in a system of $n$ anonymous processes, any consensus
algorithm satisfying nondeterministic solo termination has to use $\Omega(n)$
read-write registers in some execution. This implies an $\Omega(n)$ lower bound
on the space complexity of deterministic obstruction-free and randomized
wait-free consensus, matching the $O(n)$ upper bound and closing the symmetric
case of the open problem.
Approximate Consensus in Highly Dynamic Networks: The Role of Averaging Algorithms
In this paper, we investigate the approximate consensus problem in highly
dynamic networks in which topology may change continually and unpredictably. We
prove that in both synchronous and partially synchronous systems, approximate
consensus is solvable if and only if the communication graph in each round has
a rooted spanning tree, i.e., there is a coordinator at each time. The striking
point in this result is that the coordinator is not required to be unique and
can change arbitrarily from round to round. Interestingly, the class of
averaging algorithms, which are memoryless and require no process identifiers,
entirely captures the solvability issue of approximate consensus in that the
problem is solvable if and only if it can be solved using any averaging
algorithm. Concerning the time complexity of averaging algorithms, we show that
approximate consensus can be achieved with precision $\varepsilon$ in a
coordinated network model in a number of synchronous rounds polynomial in $n$
and $\log(1/\varepsilon)$, and within a bound that additionally depends on
$\Delta$ when the maximum round delay for a message to be delivered is
$\Delta$. While in
general, an upper bound on the time complexity of averaging algorithms has to
be exponential, we investigate various network models in which this exponential
bound in the number of nodes reduces to a polynomial bound. We apply our
results to networked systems with a fixed topology and classical benign fault
models, and deduce both known and new results for approximate consensus in
these systems. In particular, we show that for solving approximate consensus, a
complete network can tolerate up to 2n-3 arbitrarily located link faults at
every round, in contrast with the impossibility result established by Santoro
and Widmayer (STACS '89) showing that exact consensus is not solvable with n-1
link faults per round originating from the same node.
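As an illustration of the class of algorithms studied, here is a minimal sketch of an equal-neighbor averaging round on a changing graph; the specific graphs and update rule are assumptions for the example. Nodes are memoryless and identifier-free, each simply averaging the values it receives.

```python
# One round of an equal-neighbor averaging algorithm (illustrative sketch):
# memoryless, no process identifiers, each node averages what it hears.

def averaging_round(values, in_neighbors):
    # in_neighbors[v]: nodes v hears from this round (v always hears itself)
    return {v: sum(values[u] for u in in_neighbors[v]) / len(in_neighbors[v])
            for v in values}

values = {"a": 0.0, "b": 1.0, "c": 0.5}
# The graph may change arbitrarily each round, as long as it stays rooted
# (here "a" reaches every node, directly or transitively).
rounds = [{"a": {"a"}, "b": {"a", "b"}, "c": {"b", "c"}},
          {"a": {"a"}, "b": {"a", "b"}, "c": {"a", "c"}}]
for graph in rounds:
    values = averaging_round(values, graph)
print(values)   # the spread of values contracts toward agreement
```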
Monotonic Prefix Consistency in Distributed Systems
We study the issue of data consistency in distributed systems. Specifically,
we consider a distributed system that replicates its data at multiple sites,
which is prone to partitions, and which is assumed to be available (in the
sense that queries are always eventually answered). In such a setting, strong
consistency, where all replicas of the system apply every operation
synchronously, is impossible to implement. However, many weaker consistency
criteria, which allow a greater number of behaviors than strong consistency, are
implementable in available distributed systems. We focus on determining the
strongest consistency criterion that can be implemented in a convergent and
available distributed system that tolerates partitions. We consider objects
whose set of operations can be split into updates and queries. We show that
no criterion stronger than Monotonic Prefix Consistency (MPC) can be
implemented.
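A small sketch of the MPC intuition (illustrative, not the paper's formal definition; the update names and replica structure are invented): every replica exposes a monotonically growing prefix of one agreed total order of updates, so queries may be stale but never observe reordering.

```python
# Sketch of Monotonic Prefix Consistency (illustrative): every replica
# exposes a monotonically growing prefix of one agreed order of updates.

UPDATES = ["u1", "u2", "u3", "u4"]      # one global order of updates

class Replica:
    def __init__(self):
        self.prefix_len = 0             # length of the applied prefix

    def deliver(self, n):
        # Monotonicity: the exposed prefix can only grow.
        self.prefix_len = max(self.prefix_len, n)

    def query(self):
        return UPDATES[: self.prefix_len]

r1, r2 = Replica(), Replica()
r1.deliver(3); r2.deliver(1)
print(r1.query())   # ['u1', 'u2', 'u3']
print(r2.query())   # ['u1'] -- stale, but a prefix of the same order
```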
Folding of Hyperbolic Manifolds
In this paper, we introduce the definition of a hyperbolic manifold. The folding of a hyperbolic manifold into itself is defined and discussed. Types of these foldings are deduced, and theorems governing these types are obtained. Mathematics Subject Classification: 51H10, 57N2
Two-Bit Messages are Sufficient to Implement Atomic Read/Write Registers in Crash-prone Systems
Atomic registers are certainly the most basic objects of computing science.
Their implementation on top of an n-process asynchronous message-passing system
has received a lot of attention. It has been shown that t < n/2
(where t is the maximal number of processes that may crash) is a necessary and
sufficient requirement to build an atomic register on top of a crash-prone
asynchronous message-passing system. Considering such a context, this paper
presents an algorithm which implements a single-writer multi-reader atomic
register with four message types only, and where no message needs to carry
control information in addition to its type. Hence, two bits are sufficient to
capture all the control information carried by all the implementation messages.
Moreover, the messages of two types need to carry a data value while the
messages of the two other types carry no value at all. As far as we know, this
algorithm is the first with such an optimality property on the size of control
information carried by messages. It is also particularly efficient from a time
complexity point of view.
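To make the counting concrete, the sketch below shows how four message types fit in two control bits; the type names are invented for the example, as the paper's protocol defines its own four types.

```python
# Four message types fit in two control bits (type names are invented for
# this example; the paper's protocol defines its own four types).
from enum import Enum

class MsgType(Enum):
    WRITE      = 0b00   # carries the written data value
    WRITE_ACK  = 0b01   # pure control: no payload
    READ       = 0b10   # pure control: no payload
    READ_REPLY = 0b11   # carries a data value

def encode(msg_type: MsgType, value=None):
    # Two bits of control information, plus a value for two of the types.
    return msg_type.value, value

print(encode(MsgType.WRITE, 42))   # (0, 42)
print(encode(MsgType.READ))        # (2, None)
```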
On the Importance of Registers for Computability
All consensus hierarchies in the literature assume that we have, in addition
to copies of a given object, an unbounded number of registers. But why do we
really need these registers?
This paper considers what would happen if one attempts to solve consensus
using various objects but without any registers. We show that under a
reasonable assumption, objects like queues and stacks cannot emulate the
missing registers. We also show that, perhaps surprisingly, initialization,
shown to have no computational consequences when registers are readily
available, is crucial in determining the synchronization power of objects when
no registers are allowed. Finally, we show that without registers, the number
of available objects affects the level of consensus that can be solved.
Our work thus raises the question of whether consensus hierarchies which
assume an unbounded number of registers truly capture synchronization power,
and begins a line of research aimed at better understanding the interaction
between read-write memory and the powerful synchronization operations available
on modern architectures.
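For context, here is a sketch of the classical construction those hierarchies take for granted: two-process consensus from a queue whose initial state is fixed in advance, plus auxiliary registers for the proposals. The paper asks what survives when such registers are removed; the code below is only the standard textbook pattern, not the paper's result.

```python
# Classical 2-process consensus from an initialized queue PLUS registers
# (illustrative sketch). Note both ingredients the paper questions: the
# queue's initial state and the auxiliary read-write registers.
import queue, threading

q = queue.Queue()
q.put("winner"); q.put("loser")    # initialization is essential here
proposal = [None, None]            # the auxiliary registers

def propose(pid, value, decision):
    proposal[pid] = value          # announce proposal in a register
    if q.get() == "winner":
        decision[pid] = value                  # first dequeuer wins
    else:
        decision[pid] = proposal[1 - pid]      # loser adopts winner's value

decision = [None, None]
t = threading.Thread(target=propose, args=(1, "b", decision))
t.start()
propose(0, "a", decision)
t.join()
print(decision)                    # both processes decide the same value
```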
Milled Iraqi Phoenix Dactylifera Date Palm Pruning Woods Lignin Qualitative and Quantitative Determination
This study aimed to establish an analytical database for Iraqi Phoenix date palm pruning woods. Lignin was extracted from five types of Iraqi date palm using the Klason lignin method. The weight of extracted lignin ranged from 0.350 g to 0.698 g, and lignin content ranged from 17.5% to 34.9%; the fraction of waxes, oils, resins, and proteins of wood gums ranged from 22.5% to 44.5%. FT-IR characterization showed that the phenolic (-OH) disappears in all studied lignin samples, and the 4-O-5 inter-monomeric lignin linkage showed strong intensity peaks for the Khadrawi and Jamal AL-Deen samples and moderate intensities for Maktom, Barhi at, and Fahal. The DODO inter-monomeric lignin linkage showed strong intensity peaks for all studied samples. UV-Vis characterization showed that the lowest absorption maximum (254 nm) corresponds to the Fahal lignin sample, while the highest absorption maximum (275 nm) corresponds to the Jamal AL-Deen lignin sample. Keywords: Milled Iraqi Phoenix, pruning woods, lignin, quantitative determination